71.
Quantitative parameter mapping in MRI is typically performed as a two‐step procedure where serial imaging is followed by pixelwise model fitting. In contrast, model‐based reconstructions directly reconstruct parameter maps from raw data without explicit image reconstruction. Here, we propose a method that determines T1 maps directly from multi‐channel raw data as obtained by a single‐shot inversion‐recovery radial FLASH acquisition with a Golden Angle view order. Joint reconstruction of a T1, spin‐density and flip‐angle map is formulated as a nonlinear inverse problem and solved by the iteratively regularized Gauss‐Newton method. Coil sensitivity profiles are determined from the same data in a preparatory step of the reconstruction. Validations included numerical simulations, in vitro MRI studies of an experimental T1 phantom, and in vivo studies of brain and abdomen of healthy subjects at a field strength of 3 T. The results obtained for a numerical and experimental phantom demonstrated excellent accuracy and precision of model‐based T1 mapping. In vivo studies allowed for high‐resolution T1 mapping of human brain (0.5–0.75 mm in‐plane, 4 mm section thickness) and liver (1.0 mm, 5 mm section) within 3.6–5 s. In conclusion, the proposed method for model‐based T1 mapping may become an alternative to two‐step techniques, which rely on model fitting after serial image reconstruction. More extensive clinical trials now require accelerated computation and online implementation of the algorithm. © 2016 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 26, 254–263, 2016
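For reference, the conventional two-step baseline that this work contrasts with can be sketched as a pixelwise fit of the three-parameter inversion-recovery model S(t) = A - B * exp(-t / T1*), followed by the Look-Locker correction T1 = T1* * (B/A - 1). The grid-search fitting strategy and all parameter values below are illustrative; they are not the paper's model-based Gauss-Newton reconstruction.

```python
import numpy as np

# Pixelwise fit of the three-parameter inversion-recovery model
#   S(t) = A - B * exp(-t / T1_star)
# followed by the Look-Locker correction T1 = T1_star * (B / A - 1).
def ir_model(t, A, B, T1_star):
    return A - B * np.exp(-t / T1_star)

def fit_pixel(t, s, grid=np.linspace(0.1, 3.0, 300)):
    # Variable projection: for a fixed T1*, A and B enter linearly,
    # so solve a tiny least-squares problem per grid point.
    best = None
    for T1_star in grid:
        X = np.column_stack([np.ones_like(t), -np.exp(-t / T1_star)])
        coef, *_ = np.linalg.lstsq(X, s, rcond=None)
        err = float(np.sum((X @ coef - s) ** 2))
        if best is None or err < best[0]:
            best = (err, coef[0], coef[1], T1_star)
    _, A, B, T1_star = best
    return A, B, T1_star, T1_star * (B / A - 1.0)

# One simulated pixel (noiseless, arbitrary units): A=1.0, B=1.8, T1*=0.8 s.
t = np.linspace(0.05, 4.0, 40)
s = ir_model(t, 1.0, 1.8, 0.8)
A, B, T1s, T1 = fit_pixel(t, s)
```

In the two-step pipeline this fit runs once per pixel on reconstructed images; the paper instead estimates the maps directly from raw k-space data.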
72.
Dennis Dobler and Markus Pauly, TEST, 2018, 27(3): 639–658
The Mann–Whitney effect is an intuitive measure for discriminating two survival distributions. Here we analyse various inference techniques for this parameter in a two-sample survival setting with independent right-censoring, where the survival times are even allowed to be discretely distributed. This allows for ties in the data and requires the introduction of normalized versions of Kaplan–Meier estimators, from which adequate point estimates are deduced. Asymptotically exact inference procedures based on standard normal, bootstrap, and permutation quantiles are developed and compared in simulations. The permutation procedure, which is asymptotically robust and, under exchangeable data, even finitely exact, turned out to be the best. Finally, all procedures are illustrated using a real data set.
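For intuition, the Mann–Whitney effect p = P(X < Y) + 0.5 * P(X = Y) admits a simple plug-in estimate on uncensored samples. The sketch below ignores censoring entirely; the paper replaces the empirical distribution functions with normalized Kaplan–Meier estimators to handle right-censored, possibly tied data.

```python
import numpy as np

def mann_whitney_effect(x, y):
    # Plug-in estimate of p = P(X < Y) + 0.5 * P(X = Y) on uncensored
    # samples; ties receive the 0.5 weight, in the spirit of the
    # normalized estimators used in the paper.
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[None, :]
    return float((x < y).mean() + 0.5 * (x == y).mean())
```

A value of 0.5 indicates no discrimination between the two samples; values near 0 or 1 indicate that one group systematically survives longer.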
73.
74.
Single point incremental forming: state-of-the-art and prospects
Incremental sheet metal forming in general and Single Point Incremental Forming (SPIF) specifically have gone through a period of intensive development with growing attention from research institutes worldwide. The result of these efforts is significant progress in the understanding of the underlying forming mechanisms and opportunities as well as limitations associated with this category of flexible forming processes. Furthermore, creative process design efforts have enhanced the process capabilities and process planning methods. Also, simulation capabilities have evolved substantially. This review paper aims to provide an overview of the body of knowledge with respect to Single Point Incremental Forming. Without claiming to be exhaustive, each section aims for an up-to-date state-of-the-art review with corresponding conclusions on scientific progress and outlook on expected further developments.
75.
GOST 34.10 is Russia's DSA. Like its US counterpart, GOST is an ElGamal-like signature scheme used in Schnorr mode. It is similar to NIST DSA in many aspects. In this paper we will overview GOST 34.10 and discuss the three main differences between the two algorithms: (i) GOST's principal design criterion does not seem to be computational efficiency: the algorithm is 1.6 times slower than the DSA and produces 512-bit signatures. This is mainly due to the usage of the modulus q which is at least 254 bits long. During verification, modular inverses are computed by exponentiation (while the Extended Euclidean algorithm is roughly 100 times faster for this parameter size) and the generation of the public parameters is much more complicated than in the DSA. This choice of the parameters makes GOST 34.10 very secure. (ii) GOST signers do not have to generate modular inverses as the basic signature equation is s = xr + mk (mod q) instead of DSA's s = k^(-1)(m + xr) (mod q). (iii) GOST's hash function (the Russian equivalent of the SHA) is the standard GOST 34.11 which uses the block cipher GOST 28147 (partially classified) as a building block. The hash function will be briefly described.
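The signing and verification equations can be illustrated with deliberately tiny, insecure parameters; p = 23, q = 11, g = 2 and the key and nonce values below are toy choices for demonstration, not standard-conformant sizes.

```python
# Toy GOST 34.10-style signing/verification with tiny, insecure
# parameters; p, q, g and all key/nonce values are illustrative only.
p, q, g = 23, 11, 2            # g generates the order-q subgroup mod p
x = 7                          # private key
y = pow(g, x, p)               # public key

def gost_sign(m, k):
    # s = x*r + m*k (mod q): no modular inverse needed at signing time,
    # unlike DSA's s = k^(-1) * (m + x*r) (mod q).
    r = pow(g, k, p) % q
    s = (x * r + m * k) % q
    return r, s

def gost_verify(m, r, s):
    # The inverse of m is computed by exponentiation, m^(q-2) mod q,
    # as the abstract describes for the verification step.
    v = pow(m, q - 2, q)
    z1 = (s * v) % q
    z2 = ((q - r) * v) % q
    return (pow(g, z1, p) * pow(y, z2, p)) % p % q == r

m, k = 5, 3                    # toy message hash and nonce
r, s = gost_sign(m, k)
```

Note how signing uses only multiplications and additions modulo q, which is exactly difference (ii) above.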
76.
Worst-case execution time (WCET) analysis is concerned with computing a precise-as-possible bound for the maximum time the execution of a program can take. This information is indispensable for developing safety-critical real-time systems, e.g., in the avionics and automotive fields. Starting with the initial works of Chen, Mok, Puschner, Shaw, and others in the mid and late 1980s, WCET analysis turned into a well-established and vibrant field of research and development in academia and industry. The increasing number and diversity of hardware and software platforms and the ongoing rapid technological advancement became drivers for the development of a wide array of distinct methods and tools for WCET analysis. The precision, generality, and efficiency of these methods and tools depend much on the expressiveness and usability of the annotation languages that are used to describe feasible and infeasible program paths. In this article we survey the annotation languages which we consider formative for the field. By investigating and comparing their individual strengths and limitations with respect to a set of pivotal criteria, we provide a coherent overview of the state of the art. Identifying open issues, we encourage further research. This way, our approach is orthogonal and complementary to a recent approach of Wilhelm et al. who provide a thorough survey of WCET analysis methods and tools that have been developed and used in academia and industry.
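To illustrate why path annotations such as loop bounds are indispensable, here is a toy structural WCET estimator over an abstract program tree. All node costs and the loop bound are invented, and real analyzers work on binaries using ILP-based implicit path enumeration rather than this simple recursion.

```python
# Toy structural WCET estimator; without the annotated loop bound,
# the loop's contribution to the bound would be unbounded.
def wcet(node):
    kind = node[0]
    if kind == "block":                    # ("block", cycles)
        return node[1]
    if kind == "seq":                      # ("seq", [children...])
        return sum(wcet(c) for c in node[1])
    if kind == "if":                       # ("if", cond, then, else)
        return wcet(node[1]) + max(wcet(node[2]), wcet(node[3]))
    if kind == "loop":                     # ("loop", bound, body): the
        return node[1] * wcet(node[2])     # bound is the annotation
    raise ValueError(kind)

# 5 cycles of setup, then a loop annotated with bound 10 whose body
# costs 3 + (1 + max(7, 2)) = 11 cycles -> WCET = 5 + 10 * 11 = 115.
prog = ("seq", [("block", 5),
                ("loop", 10, ("seq", [("block", 3),
                                      ("if", ("block", 1),
                                             ("block", 7),
                                             ("block", 2))]))])
```

The annotation languages surveyed in the article express such bounds, and also infeasible-path constraints this sketch cannot capture, in far richer ways.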
77.
Introduction: All hospitals in the province of Styria (Austria) are well equipped with sophisticated information technology, which provides all-encompassing on-screen patient information. Previous research on the theoretical properties, advantages, and disadvantages of reading from paper vs. reading from a screen has led to the assumption that reading from a screen is slower, less accurate, and more tiring. However, recent flat-screen technology, especially LCD-based displays, is of such high quality that this assumption should now be challenged. As the electronic storage and presentation of information has many advantages in addition to faster transfer and processing of the information, the use of electronic screens in clinics should outperform the traditional hardcopy in both execution and preference ratings. This study took place in a county hospital in Styria, Austria, with 111 medical professionals working in a real-life setting. They were each asked to read original and authentic diagnosis reports, a gynecological report and an internal medical document, on both screen and paper in a randomly assigned order. Reading comprehension was measured by the Chunked Reading Test, and the speed and accuracy of reading performance were quantified. To get a full understanding of the clinicians' preferences, subjective ratings were also collected. Results: Wilcoxon signed-rank tests showed no significant differences in reading performance between paper and screen. However, the medical professionals showed a significant (90%) preference for reading from paper. Despite the high quality and the benefits of electronic media, paper still has some qualities which cannot be provided electronically to date.
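The paired comparison described above can be sketched with a compact Wilcoxon signed-rank test using a normal approximation. The reading times below are fabricated toy numbers, and library routines such as scipy.stats.wilcoxon handle ties and exact small-sample p-values more carefully than this sketch.

```python
import numpy as np
from math import erf, sqrt

def wilcoxon_signed_rank(a, b):
    # Compact Wilcoxon signed-rank test with a normal approximation;
    # tie-averaged ranks and exact small-sample p-values are omitted.
    d = np.asarray(a, float) - np.asarray(b, float)
    d = d[d != 0]                               # drop zero differences
    n = d.size
    ranks = np.argsort(np.argsort(np.abs(d))) + 1.0
    w_plus = float(ranks[d > 0].sum())          # sum of positive ranks
    mu = n * (n + 1) / 4.0
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mu) / sigma
    pvalue = 1.0 - erf(abs(z) / sqrt(2.0))      # two-sided
    return w_plus, pvalue

# Fabricated paired reading times (seconds) for 10 professionals.
screen = [212, 198, 240, 225, 230, 210, 205, 250, 220, 215]
paper  = [210, 200, 238, 228, 226, 212, 204, 248, 224, 213]
w_plus, pvalue = wilcoxon_signed_rank(screen, paper)
```

With near-symmetric differences like these, the test finds no significant performance gap, mirroring the study's result.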
78.
The display units integrated in today's head-mounted displays (HMDs) provide only a limited field of view (FOV) to the virtual world. In order to present an undistorted view to the virtual environment (VE), the perspective projection used to render the VE has to be adjusted to the limitations caused by the HMD characteristics. In particular, the geometric field of view (GFOV), which defines the virtual aperture angle used for rendering of the 3D scene, is set up according to the display field of view (DFOV). A discrepancy between these two fields of view distorts the geometry of the VE in a way that either minifies or magnifies the imagery displayed to the user. It has been shown that this distortion has the potential to affect a user's perception of the virtual space, sense of presence, and performance on visual search tasks. In this paper, we analyze the user's perception of a VE displayed in a HMD, which is rendered with different GFOVs. We introduce a psychophysical calibration method to determine the HMD's actual field of view, which may vary from the nominal values specified by the manufacturer. Furthermore, we conducted two experiments to identify perspective projections for HMDs which are identified as natural by subjects, even if these perspectives deviate from the perspectives that are inherently defined by the DFOV. In the first experiment, subjects had to adjust the GFOV for a rendered virtual laboratory such that their perception of the virtual replica matched the perception of the real laboratory, which they saw before the virtual one. In the second experiment, we displayed the same virtual laboratory, but restricted the viewing condition in the real world to simulate the limited viewing condition in a HMD environment. We found that subjects evaluate a GFOV as natural when it is larger than the actual DFOV of the HMD, in some cases up to 50 percent, even when subjects viewed the real space with a limited field of view.
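The mini-/magnification caused by a GFOV/DFOV mismatch is commonly quantified by the ratio of the tangents of the half-angles; the sketch below assumes symmetric fields of view given in degrees, and the specific angle values in the usage note are illustrative.

```python
import math

def minification_factor(gfov_deg, dfov_deg):
    # Ratio of the tangents of the half-angles; values > 1 minify the
    # displayed imagery (more scene squeezed into the same display),
    # values < 1 magnify it.
    g = math.tan(math.radians(gfov_deg) / 2.0)
    d = math.tan(math.radians(dfov_deg) / 2.0)
    return g / d
```

For example, rendering with a GFOV of 60 degrees on a display with a DFOV of 40 degrees yields a factor above 1, i.e., minified imagery, which is the direction the experiments found subjects tend to judge as natural.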
79.
Multimedia delivery in mobile multiaccess network environments has emerged as a key area within the future Internet research domain. When network heterogeneity is coupled with the proliferation of multiaccess capabilities in mobile handheld devices, one can expect many new avenues for developing novel services and applications. New mechanisms for audio/video delivery over multiaccess networks will define the next generation of major distribution technologies, but will require significantly more information to operate according to their best potential. In this paper we present and evaluate a distributed information service, which can enhance media delivery over such multiaccess networks. We describe the proposed information service, which is built upon the new distributed control and management framework (DCMF) and the mobility management triggering functionality (TRG). We use a testbed which includes 3G/HSPA, WLAN and WiMAX network accesses to evaluate our proposed architecture and present results that demonstrate its value in enhancing video delivery and minimizing service disruption in an involved scenario.
80.
Automated video analysis lacks reliability when searching for unknown events in video data. The practical approach is to watch all the recorded video data, if applicable in fast-forward mode. In this paper we present a method to adapt the playback velocity of the video to the temporal information density, so that the users can explore the video under controlled cognitive load. The proposed approach can cope with static changes and is robust to video noise. First, we formulate temporal information as symmetrized Rényi divergence, deriving this measure from signal coding theory. Further, we discuss the animated visualization of accelerated video sequences and propose a physiologically motivated blending approach to cope with arbitrary playback velocities. Finally, we compare the proposed method with the current approaches in this field by experiments and a qualitative user study, and show its advantages over motion-based measures.
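As a sketch of the core idea, the symmetrized Rényi divergence between histograms of consecutive frames can drive the playback velocity: little change allows fast-forwarding, large change slows playback toward real time. The divergence-to-speed mapping and all tuning constants below are illustrative, not the paper's calibrated model.

```python
import numpy as np

def renyi(p, q, alpha=0.5):
    # Rényi divergence D_alpha(P || Q) of discrete distributions:
    #   D_alpha = log(sum(p^alpha * q^(1-alpha))) / (alpha - 1)
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    return float(np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0))

def symmetrized_renyi(p, q, alpha=0.5):
    return renyi(p, q, alpha) + renyi(q, p, alpha)

def playback_speed(p, q, v_max=8.0, gain=20.0, alpha=0.5):
    # Little frame-to-frame change -> fast playback; large change ->
    # slow down toward real time (v_max and gain are tuning knobs).
    d = symmetrized_renyi(p, q, alpha)
    return max(1.0, v_max / (1.0 + gain * max(d, 0.0)))
```

Identical consecutive histograms yield zero divergence and maximum fast-forward speed; strongly differing histograms drive the speed back down to real time.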